26 October 2023
UK
Reporter Bob Currie

UK regulators report back on AI and Machine Learning consultation

The Bank of England has published a feedback statement summarising the responses to its public consultation on artificial intelligence and machine learning, which was launched in October 2022.

As a foundation for this consultation, the UK supervisory authorities, including the Prudential Regulation Authority and the Financial Conduct Authority, published a discussion paper entitled Artificial Intelligence and Machine Learning (DP5/22), which aimed to further their understanding of how AI may affect their objectives as prudential and conduct regulators of financial firms.

In the feedback statement, released today, the regulators provide an anonymised and aggregated summary of the consultation feedback, which was received from 54 respondents spanning a wide range of stakeholders.

Among the key findings, respondents indicated that a single regulatory definition of AI would not be useful. They pointed instead to a range of alternative, principles-based or risk-based approaches to defining AI, and did not believe that a single regulatory definition could capture all of the relevant characteristics of AI models or the potential benefits and risks that they present.

The feedback statement highlights the importance of cross-industry engagement, including the work done by bodies such as the AI Public-Private Forum (AIPPF). Respondents suggested that initiatives such as the AIPPF could serve as templates for ongoing public-private engagement.

Given the complexity of the global landscape in which AI models and AI-based technology operate, respondents highlighted the need for greater cooperation and alignment between regulators worldwide.

A primary focus of this regulatory dialogue should be on consumer outcomes, taking into account ethical implications and how the outcomes delivered by AI may affect “fairness” across the economy and society.

Given these complexities, a “joined-up” approach to managing and mitigating AI risks across business units and functions is important, particularly in promoting closer collaboration between data management and model risk management teams.

In terms of risk mitigation for the banking sector, respondents indicated that the principles contained in Consultation Paper 6/22, Model Risk Management for Banks, are generally sufficient to cover AI model risk. However, respondents highlighted areas where these principles could be clarified or reinforced to improve their efficacy.

At company level, respondents indicated that existing governance structures, and regulatory frameworks such as the Senior Managers and Certification Regime (SM&CR), are currently sufficient to address the AI risks identified in DP5/22 and considered by the consultation process.
